RE: [-empyre-] next (robotic) steps
> -----Original Message-----
> From: empyre-bounces@lists.cofa.unsw.edu.au
> [mailto:empyre-bounces@lists.cofa.unsw.edu.au] On Behalf Of Nicholas
> Stedman
> Sent: Friday, December 17, 2004 7:31 PM
> To: soft_skinned_space
> Subject: Re: [-empyre-] next (robotic) steps
>
>
> >what is the next most essential 'human' sense for a robot,
> >in your opinion - artificial life - artificial intelligence
> >- ethical robots - what would *your* priorities be?
>
>
> Hello,
> This is my first post to empyre.
> In the last few years the idea has become quite popular that
> intelligence is a complex system arising from the interaction of
> small, discrete particles performing simple tasks in relation to one
> another. Very recently I've been hearing about 'dumb' machines. This
> seems to remove the overarching goal of emulating complex behaviour and
> refocuses the energy on just exploring simple, 'dumb' tasks.
> 'Intelligence, ethics and human priorities', it is said, are far too
> complex to represent through current technology, let alone to understand
> ourselves. The New York Times Magazine has a small blurb on it this
> week (you can read it online, but you have to subscribe). If anyone
> knows about this topic, it would be good to hear more.
>
> My questions to you, then: what do you think of approaching machines
> as 'dumb', as unlike us? Why use mimesis as a guiding principle in
> robotic design instead of other principles and references, or are robots
> by definition mimetic? I ask because I'm currently confronting this in
> my own work. Some of my machines to date have been explorations of the
> boundary between things that are like us and things that aren't,
> rolling both into the same object. Lately, I've been more curious about
> machines that are complementary to us instead of similar, but these are
> not necessarily mutually exclusive ideas.
>
> Best,
> Nicholas Stedman
> http://nickstedman.banff.org
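
Stedman's first point, that complexity can arise from many simple parts
interacting, is easy to demonstrate on a small scale; Conway's Game of Life
is the usual toy example. A minimal sketch in Python (my own illustration,
nothing from his work):

from collections import Counter

def step(live):
    """One generation; `live` is a set of (x, y) live cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells whose collective habit of walking diagonally
# across the grid appears nowhere in the rule above.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally

The glider's walk is not programmed in; it emerges from the rule. Whether
that sort of thing scales up to intelligence is, of course, the open
question.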

I wonder how this relates to the earlier observation about the 'animal'? As
a species, there was certainly a time when our language capabilities did not
include speech, or much conscious symbolic understanding at all, and that is
recapitulated, to some extent, as we grow from babes.

How important is the notion of a 'world view' to the 'animal' or the 'dumb'?
Not that they need be the same thing, of course.

"...a world view...is essentially a model of the world as the preson
perceives it. the model contains information about all objects which are
known to exist, the attributes of those objects and the relationships
between them....To endow machines with similarly huge and intricate models
of the world appears very difficult, and is certainly well byond our current
capabilities (1988). One reason for this, which may be short-lived, is that
the capacity of the largest computer memories is far less than that of the
human brain. A more significant reason  is that no generally convenient
method of representing  knowledge in a computer has yet been discovered. The
major problem is not in storing knowledge but in recognizing when particular
items are relevant, and in retrieving them as required. A simple list of
facts is inadequate: the relationships between facts are of crucial
importance. Attempts to represent such relationships by complex data
structures always seem to founder on the same two difficulties.  The first
is to know what relationships are relevant. The second is the time taken to
locate and retrieve all information relevant to the problem at hand. This
time rapidly becomes infeasible as the body of knowledge increases."
from 'Computer Science--A Modern [1988] Introduction' by Goldschlager and
Lister
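
To make the book's retrieval problem concrete, here is a bare-bones 'list
of facts' store in Python (a sketch of mine, not anything from the book),
with each fact a (subject, relation, object) triple. The naive query scans
the whole body of knowledge; the index is fast, but only for the one
relationship we thought to anticipate, which is exactly the difficulty of
knowing in advance what is relevant:

from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.facts = []                      # the flat 'list of facts'
        self.by_subject = defaultdict(list)  # crude index: subject -> facts

    def add(self, subject, relation, obj):
        fact = (subject, relation, obj)
        self.facts.append(fact)
        self.by_subject[subject].append(fact)

    def query_scan(self, subject):
        # O(n) in everything known: the cost that "rapidly becomes
        # infeasible as the body of knowledge increases".
        return [f for f in self.facts if f[0] == subject]

    def query_indexed(self, subject):
        # Fast, but answers only the question we planned for; deciding
        # which relationships matter is the part nobody has solved.
        return self.by_subject[subject]

kb = TripleStore()
kb.add("cat", "is-a", "animal")
kb.add("cat", "has", "fur")
kb.add("robot", "lacks", "world view")
print(kb.query_scan("cat") == kb.query_indexed("cat"))  # True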

ja
http://vispo.com
